6VecLM: Language Modeling in Vector Space for IPv6 Target Generation
Authors
Abstract
Fast IPv6 scanning is challenging in the field of network measurement as it requires exploring the whole address space but is limited by current computational power. Researchers propose to obtain possible active target candidate sets to probe by algorithmically analyzing seed sets. However, IPv6 addresses lack semantic information and contain numerous addressing schemes, which makes it difficult to design effective algorithms. In this paper, we introduce our approach 6VecLM to explore achieving such target generation algorithms. The architecture can map addresses into a vector space to interpret semantic relationships and uses a Transformer network to build IPv6 language models for predicting address sequences. Experiments indicate that our approach can perform semantic classification on the address space. By adding a new generation approach, the model possesses controllable word innovation capability compared to conventional language models. The work outperformed state-of-the-art target generation algorithms on two active address datasets by reaching candidate sets of higher quality.
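The abstract describes the pipeline only at a high level. As a rough illustration of the idea, the following minimal PyTorch sketch splits an address into position-tagged nybble "words", embeds them in a vector space, and lets a causally masked Transformer predict the next word of the sequence. The word scheme, model sizes, and all names here (address_to_words, IPv6TransformerLM) are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of the idea in the abstract: treat an IPv6 address as a
# sequence of "words", embed the words in a vector space, and train a
# Transformer language model to predict the next word. Vocabulary scheme,
# model sizes, and training details are assumptions for illustration only.
import torch
import torch.nn as nn

def address_to_words(addr_hex32: str) -> list[str]:
    """Turn a fully expanded 32-nybble IPv6 address into position-tagged words.

    Example: nybble '2' at index 0 becomes the word '2_0', so identical hex
    values at different positions remain distinct (an assumed word scheme).
    """
    return [f"{nybble}_{i}" for i, nybble in enumerate(addr_hex32)]

# Toy vocabulary: 16 hex values x 32 positions, plus a start-of-sequence token.
VOCAB = ["<s>"] + [f"{v:x}_{i}" for i in range(32) for v in range(16)]
STOI = {w: i for i, w in enumerate(VOCAB)}

class IPv6TransformerLM(nn.Module):
    """Decoder-style Transformer LM over address words (illustrative sizes)."""
    def __init__(self, vocab_size: int, d_model: int = 64, nhead: int = 4,
                 num_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # vector-space mapping
        self.pos = nn.Embedding(33, d_model)             # learned positions
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        seq_len = tokens.size(1)
        positions = torch.arange(seq_len, device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        # Causal mask so each position attends only to earlier words.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf"),
                                     device=tokens.device), diagonal=1)
        h = self.encoder(x, mask=mask)
        return self.out(h)                               # next-word logits

# Usage: encode one seed address and obtain logits for each next-word step.
seed = "20010db8000000000000000000000001"  # documentation-prefix example
ids = torch.tensor([[STOI["<s>"]] + [STOI[w] for w in address_to_words(seed)]])
model = IPv6TransformerLM(len(VOCAB))
logits = model(ids)  # shape (1, 33, vocab_size); sampling from these logits
                     # word by word would generate new candidate addresses
```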
Similar Resources
Improved language identification using support vector machines for language modeling
Automatic language identification (LID) decisions are made based on scores of language models (LM). In our previous paper [1], we have shown that replacing n-gram LMs with SVMs significantly improved performance of both the PPRLM and GMM tokenization-based LID systems when tested on the OGI-TS corpus. However, the relatively small corpus size may limit the general applicability of the findings. ...
Tree-based Target Language Modeling
In this paper we describe an approach to target language modeling which is based on a large treebank. We assume a bag of bags as input for the target language generation component, leaving it up to this component to decide upon word and phrase order. An experiment with Dutch as target language shows that this approach to candidate translation reranking outperforms standard n-gram modeling, when...
A State-space Method for Language Modeling
In this paper, a new state-space method for language modeling is presented. The complexity of the model is controlled by choosing the dimension of the state instead of the smoothing and back-off methods common in n-gram modeling. The model complexity also controls the generalization ability of the model, allowing the model to handle similar words in similar manner. We compare the state-space mo...
Context Modeling for Language and Speech Generation
This paper discusses the various ways in which the generation of an utterance requires modeling of the linguistic context of the utterance. To illustrate the role of context modeling in monologue generation, the so-called Dial-Your-Disc (DYD) system is presented, which supports browsing through a large database of musical information and generates a spoken monologue once a musical composition h...
Backbone Language Modeling for Constrained Natural Language Generation
Recent language models, especially those based on recurrent neural networks (RNNs), make it possible to generate natural language from a learned probability. Language generation has wide applications including machine translation, summarization, question answering, conversation systems, etc. Existing methods typically learn a joint probability of words conditioned on additional information, whi...
Journal
Journal Title: Lecture Notes in Computer Science
Year: 2021
ISSN: 1611-3349, 0302-9743
DOI: https://doi.org/10.1007/978-3-030-67667-4_12